Information Preservation


What Matters in Graph Class Incremental Learning? An Information Preservation Perspective

Neural Information Processing Systems

Graph class incremental learning (GCIL) requires a model to classify emerging nodes of new classes while remembering old classes. Existing methods are designed to preserve effective information from old models or graph data to alleviate forgetting, but there is no clear theoretical understanding of what matters in information preservation. In this paper, we show that present practice suffers from high semantic and structural shifts, assessed by two devised shift metrics. We provide insights into information preservation in GCIL and find that maintaining graph information can, in theory, preserve information from old models and calibrate node semantic and graph structure shifts. We further decompose graph information into low-frequency local-global information and high-frequency information in the spatial domain.
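The low-/high-frequency split in the spatial domain can be illustrated with a minimal sketch: take the neighbourhood average (including the node itself) as the low-pass component and the residual as the high-pass component. The function name and the choice of plain mean aggregation are illustrative assumptions, not the paper's exact operator.

```python
def split_frequencies(adj, feats):
    """Split node features into a low-frequency part (neighbourhood
    average, including self) and a high-frequency residual, so that
    feats = low + high holds node-wise."""
    n = len(feats)
    low, high = [], []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]] + [i]
        avg = [sum(feats[j][d] for j in nbrs) / len(nbrs)
               for d in range(len(feats[i]))]
        low.append(avg)
        high.append([x - a for x, a in zip(feats[i], avg)])
    return low, high

# toy 3-node path graph with 2-d features
adj = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
low, high = split_frequencies(adj, feats)
```

By construction the two components sum back to the original features, so no information is lost by the decomposition itself.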


A New HOPE: Domain-agnostic Automatic Evaluation of Text Chunking

Brådland, Henrik, Goodwin, Morten, Andersen, Per-Arne, Nossum, Alexander S., Gupta, Aditya

arXiv.org Artificial Intelligence

Document chunking fundamentally impacts Retrieval-Augmented Generation (RAG) by determining how source materials are segmented before indexing. Despite evidence that Large Language Models (LLMs) are sensitive to the layout and structure of retrieved data, there is currently no framework to analyze the impact of different chunking methods. In this paper, we introduce a novel methodology that defines essential characteristics of the chunking process at three levels: intrinsic passage properties, extrinsic passage properties, and passage-document coherence. We propose HOPE (Holistic Passage Evaluation), a domain-agnostic, automatic evaluation metric that quantifies and aggregates these characteristics. Our empirical evaluations across seven domains demonstrate that the HOPE metric correlates significantly (p > 0.13) with various RAG performance indicators, revealing contrasts between the importance of extrinsic and intrinsic passage properties. Semantic independence between passages proves essential for system performance, with gains of up to 56.2% in factual correctness and 21.1% in answer correctness. By contrast, the traditional assumption that concept unity should be maintained within passages shows minimal impact. These findings provide actionable insights for optimizing chunking strategies and improving RAG system design to produce more factually correct responses.
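The passage-independence idea can be made concrete with a crude proxy: the mean cosine similarity of bag-of-words vectors of neighbouring chunks, where lower values suggest more semantically independent passages. This is an illustrative stand-in, not the HOPE metric itself, and the function names are assumptions.

```python
from collections import Counter
import math

def cosine(a, b):
    # cosine similarity of two sparse count vectors (missing keys count as 0)
    dot = sum(v * b[t] for t, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def adjacent_dependence(chunks):
    """Mean cosine similarity of bag-of-words vectors of neighbouring
    passages: a rough inverse proxy for semantic independence."""
    vecs = [Counter(c.lower().split()) for c in chunks]
    return sum(cosine(vecs[i], vecs[i + 1])
               for i in range(len(vecs) - 1)) / (len(vecs) - 1)

entangled = ["the cache stores keys", "the cache evicts keys", "the cache misses keys"]
independent = ["install the package", "configure logging options", "deploy to production"]
```

Chunks that keep referring to the same entities score high on this proxy; a chunker that produces self-contained passages drives it toward zero.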


Can SGD Select Good Fishermen? Local Convergence under Self-Selection Biases and Beyond

Kalavasis, Alkis, Mehrotra, Anay, Zhou, Felix

arXiv.org Machine Learning

We revisit the problem of estimating $k$ linear regressors with self-selection bias in $d$ dimensions with the maximum selection criterion, as introduced by Cherapanamjeri, Daskalakis, Ilyas, and Zampetakis [CDIZ23, STOC'23]. Our main result is a $\operatorname{poly}(d,k,1/\varepsilon) + {k}^{O(k)}$ time algorithm for this problem, which yields an improvement in the running time of the algorithms of [CDIZ23] and [GM24, arXiv]. We achieve this by providing the first local convergence algorithm for self-selection, thus resolving the main open question of [CDIZ23]. To obtain this algorithm, we reduce self-selection to a seemingly unrelated statistical problem called coarsening. Coarsening occurs when one does not observe the exact value of the sample but only some set (a subset of the sample space) that contains the exact value. Inference from coarse samples arises in various real-world applications due to rounding by humans and algorithms, limited precision of instruments, and lag in multi-agent systems. Our reduction to coarsening is intuitive and relies on the geometry of the self-selection problem, which enables us to bypass the limitations of previous analytic approaches. To demonstrate its applicability, we provide a local convergence algorithm for linear regression under another self-selection criterion, which is related to second-price auction data. Further, we give the first polynomial time local convergence algorithm for coarse Gaussian mean estimation given samples generated from a convex partition. Previously, only a sample-efficient algorithm was known due to Fotakis, Kalavasis, Kontonis, and Tzamos [FKKT21, COLT'21].
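The coarsening setup described above can be sketched in its simplest form: each Gaussian draw is observed only as the unit interval containing it (rounding, a convex partition of the line), and an EM-style update with known variance recovers the mean. This is a minimal stdlib illustration of the problem class, not the paper's algorithm.

```python
import math
import random

def phi(z):   # standard normal pdf
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):   # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def em_coarse_mean(intervals, sigma=1.0, iters=50, mu0=0.0):
    """EM for the Gaussian mean when each sample is observed only as an
    interval [a, b) that contains it (coarsening by rounding)."""
    mu = mu0
    for _ in range(iters):
        total = 0.0
        for a, b in intervals:
            za, zb = (a - mu) / sigma, (b - mu) / sigma
            denom = Phi(zb) - Phi(za)
            # E-step: conditional mean of N(mu, sigma^2) given x in [a, b)
            total += mu + sigma * (phi(za) - phi(zb)) / denom
        mu = total / len(intervals)   # M-step: average the imputed values
    return mu

random.seed(0)
true_mu = 0.7
# coarsen each draw to the unit interval (floor) that contains it
intervals = [(math.floor(x), math.floor(x) + 1)
             for x in (random.gauss(true_mu, 1.0) for _ in range(5000))]
est = em_coarse_mean(intervals)
```

Even though no exact value is ever observed, the interval likelihoods pin down the mean, which is the sense in which inference from coarse samples is possible.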


Persistent Homology-induced Graph Ensembles for Time Series Regressions

Nguyen, Viet The, Pham, Duy Anh, Le, An Thai, Peter, Jans, Gust, Gunther

arXiv.org Artificial Intelligence

The effectiveness of Spatio-temporal Graph Neural Networks (STGNNs) in time-series applications is often limited by their dependence on fixed, hand-crafted input graph structures. Motivated by the insight from Topological Data Analysis (TDA) that real-world data exhibits multi-scale patterns, we construct several graphs using persistent homology filtration, a mathematical framework describing the multi-scale structural properties of data points. We then use the constructed graphs as inputs to create an ensemble of Graph Neural Networks. The ensemble aggregates the signals from the individual learners via an attention-based routing mechanism, thus systematically encoding the inherent multi-scale structure of the data. Four real-world experiments on seismic activity prediction and traffic forecasting (PEMS-BAY, METR-LA) demonstrate that our approach consistently outperforms single-graph baselines while providing interpretable insights.
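The filtration-based multi-graph construction can be sketched as the 1-skeletons of a Vietoris-Rips filtration: one graph per distance threshold, with edge sets nested as the threshold grows. The attention-based routing over the ensemble is omitted; this only illustrates the graph-construction step, under the assumption of plain Euclidean distances.

```python
def rips_graphs(points, thresholds):
    """1-skeletons of a Vietoris-Rips filtration: one edge set per
    distance threshold; smaller-threshold edge sets are contained in
    larger-threshold ones."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    n = len(points)
    return [{(i, j) for i in range(n) for j in range(i + 1, n)
             if dist(points[i], points[j]) <= eps}
            for eps in sorted(thresholds)]

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (5.0, 5.0)]
g_small, g_mid, g_large = rips_graphs(pts, [1.5, 3.0, 10.0])
```

Each graph in the list captures structure at one scale, which is what an ensemble of GNN learners could then consume.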


Why Sample Space Matters: Keyframe Sampling Optimization for LiDAR-based Place Recognition

Stathoulopoulos, Nikolaos, Sumathy, Vidya, Kanellakis, Christoforos, Nikolakopoulos, George

arXiv.org Artificial Intelligence

Recent advances in robotics are pushing real-world autonomy, enabling robots to perform long-term and large-scale missions. A crucial component of successful missions is the incorporation of loop closures through place recognition, which effectively mitigates accumulated pose estimation drift. Despite computational advancements, optimizing performance for real-time deployment remains challenging, especially on resource-constrained mobile robots and in multi-robot systems: conventional keyframe sampling practices in place recognition often retain redundant information or overlook relevant data, as they rely on fixed sampling intervals or work directly in 3D space instead of the feature space. To address these concerns, we introduce the concept of sample space in place recognition and demonstrate how different sampling techniques affect the query process and overall performance. We then present a novel keyframe sampling approach for LiDAR-based place recognition that focuses on redundancy minimization and information preservation in the hyper-dimensional descriptor space. The approach is applicable to both learning-based and handcrafted descriptors, and experimental validation across multiple datasets and descriptor frameworks demonstrates its effectiveness, showing that it can jointly minimize redundancy and preserve essential information in real time. It maintains robust performance across various datasets without requiring parameter tuning, contributing to more efficient and reliable place recognition for a wide range of robotic applications.
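Sampling in descriptor space rather than 3D space can be illustrated with a greedy sketch: keep a frame only if its descriptor is sufficiently far from every descriptor already kept, so near-duplicate views collapse to one keyframe. The greedy rule and the threshold are illustrative assumptions, not the paper's method.

```python
def sample_keyframes(descriptors, min_dist):
    """Greedy redundancy minimization in descriptor space: keep a frame
    only if its descriptor is at least min_dist from every kept one."""
    def euclid(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    kept = []
    for d in descriptors:
        if all(euclid(d, k) >= min_dist for k in kept):
            kept.append(d)
    return kept

# near-duplicate descriptors (e.g. a robot standing still) collapse
descs = [(0.0, 0.0), (0.05, 0.0), (1.0, 1.0), (1.02, 1.0), (3.0, 0.0)]
keys = sample_keyframes(descs, min_dist=0.5)
```

Because the test is done on descriptors, two frames taken far apart in space but describing the same-looking place would also be deduplicated, which a fixed spatial interval cannot achieve.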


Joint Information Preservation for Heterogeneous Domain Adaptation

Xu, Peng, Deng, Zhaohong, Choi, Kup-Sze, Wang, Jun, Wang, Shitong

arXiv.org Machine Learning

Domain adaptation aims to assist modeling tasks in the target domain with knowledge from the source domain. The two domains often lie in different feature spaces due to diverse data collection methods, which leads to the more challenging task of heterogeneous domain adaptation (HDA). A core issue in HDA is how to preserve the information of the original data during adaptation. In this paper, we propose a joint information preservation method to deal with this problem. The method preserves the information of the original data from two aspects. On the one hand, although paired samples often exist between the two domains in HDA, current algorithms do not utilize such information sufficiently; the proposed method preserves the paired information by maximizing the correlation of the paired samples in the shared subspace. On the other hand, the proposed method improves the strategy for preserving the structural information of the original data, preserving local and global structural information simultaneously. Finally, the two forms of information preservation are integrated through distribution matching. Experimental results show the superiority of the proposed method over state-of-the-art HDA algorithms.


Degeneration in VAE: in the Light of Fisher Information Loss

Zheng, Huangjie, Yao, Jiangchao, Zhang, Ya, Tsang, Ivor W.

arXiv.org Machine Learning

The Variational Autoencoder (VAE) is one of the most popular generative models, and substantial advances have been made in recent years. Due to the increasing complexity of raw data and model architectures, deep networks are needed in VAE models, yet few works discuss their impact. According to our observations, a VAE does not always benefit from a deeper architecture: 1) a deeper encoder makes the VAE learn more comprehensible latent representations but results in blurry reconstructions; 2) a deeper decoder ensures higher-quality generations, but the latent representations become abstruse; 3) when encoder and decoder both go deeper, abstruse latent representations occur together with blurry reconstructions. In this paper, we derive a Fisher information measure for the corresponding analysis. With this measure, we demonstrate that information loss is ineluctable in feed-forward networks and causes the three types of degeneration above, especially as the network goes deeper. We also demonstrate that skip connections help preserve the amount of information, and thus propose a VAE enhanced by skip connections, named SCVAE. In experiments, SCVAE is shown to mitigate the information loss and to achieve promising performance in both encoding and decoding tasks. Moreover, SCVAE can be adapted to other state-of-the-art variants of VAE for further amelioration.
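Why a skip connection bounds information loss can be shown with a toy sketch: with y = f(x) + x, the identity path forwards the input regardless of what the layer f does, so even a pathological layer cannot destroy it. This illustrates the principle only, not the SCVAE architecture.

```python
def destructive_layer(x):
    """A pathological layer that maps everything to zero, i.e. it
    destroys all information about its input."""
    return [0.0 for _ in x]

def skip_block(f, x):
    """Residual/skip connection y = f(x) + x: the identity path carries
    x forward unchanged, so the input is never lost."""
    return [fi + xi for fi, xi in zip(f(x), x)]

x = [1.0, -2.0, 3.0]
plain = destructive_layer(x)                 # input irrecoverable from output
skipped = skip_block(destructive_layer, x)   # input preserved exactly
```

Stacking such blocks keeps a lossless path from input to output, which is the intuition behind using skip connections to counteract the Fisher-information loss of deep feed-forward stacks.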